
    Microfluidic Pumping With Surface Tension Force and Magnetohydrodynamic Drive

    Micropumps are difficult to design and control compared to their macro-scale counterparts due to size limitations. The first part of this dissertation focuses on micropumping with surface tension forces. A simple, single-action capillary pump/valve consisting of a bi-phase slug confined in a non-uniform conduit is described. At low temperatures, the slug is solid and seals the conduit. Once heated above its melting temperature, the liquid slug moves spontaneously along a predetermined path due to an imbalance of surface tension forces. This technique can easily be combined with other propulsion mechanisms such as pressure and magnetohydrodynamics (MHD). The second part of this dissertation focuses on MHD micropumping, which provides a convenient, programmable means for propelling liquids and controlling fluid flow without the need for mechanical pumps and valves. First, we examined the response of a model one-dimensional electrochemical thin film to time-independent and time-dependent applied polarizations, using the Nernst-Planck (NP) model with electroneutrality and the Poisson-Nernst-Planck (PNP) model without electroneutrality, respectively. The NP model, with well-designed boundary conditions, proved capable of describing the bulk behavior as accurately as the full PNP model. Second, we studied MHD-propelled liquid motion in a uniform conduit patterned with cylinders. We proved the equivalence of MHD-driven and pressure-driven flow patterns under certain conditions. We examined the effect of interior obstacles on the electric current flow in the conduit and showed the existence of a particular pillar geometry that maximizes the current. Third, we examined the MHD flow of a binary electrolyte between concentric cylinders. The base flow is similar to the pressure-driven flow in the same setup; the first-order perturbation fields, however, behave differently from the traditional Dean flow. We carried out a one-dimensional linear stability analysis for the unbounded, small-gap situation and solved it as an eigenvalue problem. A two-dimensional nonlinear simulation was performed for finite gap sizes, i.e., bounded situations. We observed a strong dependence of the onset of instability on the direction of the applied electric field. The results of this study could help enhance the stability of the system or introduce secondary motion, depending on the nature of the application.

    When MHD-Based Microfluidics is Equivalent to Pressure-Driven Flow

    Magnetohydrodynamics (MHD) provides a convenient, programmable means for propelling liquids and controlling fluid flow in microfluidic devices without the need for mechanical pumps and valves. When the magnetic field is uniform and the electric field in the electrolyte solution is confined to a plane perpendicular to the direction of the magnetic field, the Lorentz body force is irrotational and one can define a “Lorentz” potential. Since the MHD-induced flow field under these circumstances is identical to that of pressure-driven flow, one can utilize the large available body of knowledge about pressure-driven flows to predict MHD flows and infer MHD flow patterns. In this note, we prove the equivalence between MHD flows and pressure-driven flows under conditions beyond flow in straight conduits with rectangular cross-sections. We determine the velocity profile and the efficiency of MHD pumps, accounting for current transport in the electrolyte solutions. Then, we demonstrate how data available for pressure-driven flow can be utilized to study various MHD flows, in particular in a conduit patterned with pillars, such as may be useful for liquid chromatography and chemical reactors. Additionally, we examine the effect of interior obstacles on the electric current flow in the conduit and show the existence of a particular pillar geometry that maximizes the current.
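The irrotationality argument in this abstract can be written out compactly. As a sketch (the notation here is ours, not the paper's, and steady current flow with no variation along the field direction is assumed): with a uniform magnetic field and a planar current density, the Lorentz body force has zero curl because the current is divergence-free:

```latex
\mathbf{B} = B\,\hat{\mathbf{z}}, \qquad \mathbf{J} = (J_x, J_y, 0), \qquad
\mathbf{F} = \mathbf{J} \times \mathbf{B} = B\,(J_y,\, -J_x,\, 0),
```
```latex
(\nabla \times \mathbf{F})_z
  = \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}
  = -B\left(\frac{\partial J_x}{\partial x} + \frac{\partial J_y}{\partial y}\right)
  = -B\,\nabla \cdot \mathbf{J} = 0.
```

Charge conservation thus lets one write the force as the gradient of a scalar, a “Lorentz” potential, which enters the momentum equation exactly as a pressure gradient does; this is the sense in which the MHD flow is equivalent to a pressure-driven flow.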

    ACCELERATING STORAGE APPLICATIONS WITH EMERGING KEY VALUE STORAGE DEVICES

    With the continuous data explosion of the big data era, traditional software and hardware stacks are facing unprecedented challenges in operating at such data scales. Designing new architectures and efficient systems for data-oriented applications has therefore become increasingly critical. This motivates us to rethink conventional storage system design and re-architect both software and hardware to meet the challenges of scale. Besides the fast growth of data volume, the increasing demands of storage applications such as video streaming and data analytics are pushing high-performance flash-based storage devices to replace traditional spinning disks. This all-flash era raises data reliability concerns due to the endurance problem of flash devices. Key-value stores (KVS) are an important storage infrastructure for handling fast-growing unstructured data and have been widely deployed in a variety of scale-out enterprise applications such as online retail, big data analytics, and social networks. How to efficiently manage data redundancy in key-value stores to provide data reliability, and how to efficiently support range queries to accelerate analytics-oriented applications under emerging key-value storage architectures, have become important research problems. In this research, we focus on designing new software/hardware architectures for key-value store applications that provide reliability and improve query performance. To address the different issues identified in this dissertation, we propose to employ a logical key management layer, a thin layer above the KV devices that maps logical keys to physical keys on the devices. We show how such a layer can enable multiple solutions that improve the performance and reliability of KVSSD-based storage systems. First, we present KVRAID, a high-performance, write-efficient erasure-coding management scheme for emerging key-value SSDs. The core innovation of KVRAID is a logical key management layer that maps logical keys to physical keys in order to efficiently pack similar-size KV objects and dynamically manage the membership of erasure-coding groups. Unlike existing schemes, which manage erasure codes at the block level, KVRAID manages erasure codes at the KV-object level. To achieve better storage efficiency for variable-sized objects, KVRAID predefines multiple fixed sizes (slabs), according to the object-size distribution, for the erasure code. KVRAID uses a logical-to-physical key conversion to pack KV objects of similar size into a parity group, and a lazy deletion mechanism with a garbage collector for object updates. Our experiments show that in a 100% put workload, KVRAID outperforms software block RAID by 18x in throughput and reduces write amplification (WAF) by 15x with only ~5% CPU utilization. In mixed update/get workloads, KVRAID achieves ~4x better throughput with ~23% CPU utilization and reduces the storage overhead and WAF by 3.6x and 11.3x on average, respectively. Second, we present KVRangeDB, an ordered, log-structured-tree-based key index that supports range queries on a hash-based KVSSD. In addition, we propose to pack smaller application records into a larger physical record on the device through the logical key management layer. We compared the performance of KVRangeDB against a RocksDB implementation on KVSSD and the state-of-the-art software KV store Wisckey on a block device, on three types of real-world applications: cloud-serving workloads, the TABLEFS filesystem, and time-series databases. For cloud-serving applications, KVRangeDB achieves 8.3x and 1.7x better 99.9% write tail latency compared to the RocksDB implementation on KV-SSD and Wisckey on block SSD, respectively. On the query side, KVRangeDB performs worse only for very long scans, while providing fast point queries and closed-range queries. The experiments on TABLEFS demonstrate that using KVRangeDB for metadata indexing can boost performance by a factor of ~6.3x on average and reduce CPU cost by ~3.9x for four metadata-intensive workloads, compared to the RocksDB implementation on KVSSD. Compared to Wisckey, KVRangeDB improves performance by ~2.6x on average and reduces CPU usage by ~1.7x. Third, we propose a generic FPGA accelerator for emerging Minimum Storage Regenerating (MSR) code encoding/decoding that maximizes computation parallelism and minimizes data movement between off-chip DRAM and the on-chip SRAM buffers. To demonstrate the efficiency of our proposed accelerator, we implemented the encoding/decoding algorithms for a specific MSR code, called the Zigzag code, on a Xilinx VCU1525 acceleration card. Our evaluation shows that the proposed accelerator achieves ~2.4-3.1x better throughput and ~4.2-5.7x better power efficiency compared to a state-of-the-art multi-core CPU implementation, and ~2.8-3.3x better throughput and ~4.2-5.3x better power efficiency compared to a modern GPU accelerator.
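The logical-key-management idea described in this abstract can be sketched in a few lines. The slab sizes, group width, key format, and class names below are illustrative assumptions, not the actual KVRAID implementation: logical keys are mapped to physical keys so that similar-size values land in the same fixed-size slab class and can share an erasure-coding (parity) group.

```python
SLAB_SIZES = [1024, 4096, 16384]   # predefined slab classes in bytes (assumed)
GROUP_WIDTH = 4                    # data objects per parity group (assumed)

class LogicalKeyLayer:
    """Thin layer mapping logical keys to slab-aware physical keys."""

    def __init__(self):
        self.mapping = {}                               # logical key -> physical key
        self.open_groups = {s: [] for s in SLAB_SIZES}  # partially filled groups
        self.next_group_id = 0

    def _slab_for(self, size):
        # Smallest predefined slab that fits the object.
        for s in SLAB_SIZES:
            if size <= s:
                return s
        raise ValueError("object larger than largest slab")

    def put(self, logical_key, value):
        slab = self._slab_for(len(value))
        group = self.open_groups[slab]
        group.append(logical_key)
        # Physical key encodes slab class, parity-group id, and slot in the group,
        # so objects of similar size end up in the same erasure-coding group.
        physical_key = f"s{slab}-g{self.next_group_id}-{len(group) - 1}"
        self.mapping[logical_key] = physical_key
        if len(group) == GROUP_WIDTH:       # group full: parity would be computed
            self.next_group_id += 1         # and written to the device here
            self.open_groups[slab] = []
        return physical_key
```

Because the device only ever sees physical keys, the same thin layer can also serve the record-packing role mentioned for KVRangeDB: several small application records can share one physical key.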

    Structural similarity loss for learning to fuse multi-focus images

    © 2020 by the authors. Licensee MDPI, Basel, Switzerland. Convolutional neural networks have recently been used for multi-focus image fusion. However, some existing methods have resorted to adding Gaussian blur to focused images to simulate defocus, thereby generating data (with ground truth) for supervised learning. Moreover, they classify pixels as ‘focused’ or ‘defocused’ and use the classified results to construct the fusion weight maps, which then necessitates a series of post-processing steps. In this paper, we present an end-to-end learning approach for directly predicting the fully focused output image from multi-focus input image pairs. The suggested approach uses a CNN architecture trained to perform fusion without the need for ground-truth fused images. The CNN exploits the image structural similarity (SSIM) metric, which is widely accepted for fused-image quality evaluation, to calculate the loss. What is more, we also use the standard deviation of a local window of the image to automatically estimate the importance of the source images in the final fused image when designing the loss function. Our network can accept images of variable sizes; hence, we are able to utilize real benchmark datasets, instead of simulated ones, to train our network. The model is a feed-forward, fully convolutional neural network that can process images of variable sizes at test time. Extensive evaluation on benchmark datasets shows that our method outperforms, or is comparable with, existing state-of-the-art techniques on both objective and subjective benchmarks.
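The loss idea in this abstract, SSIM against each source image weighted by local contrast, can be sketched with numpy. This is a simplified illustration under stated assumptions: a single global SSIM window rather than the sliding windows a CNN training loop would use, and `fusion_loss` is our name, not the paper's.

```python
import numpy as np

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Single-window SSIM between two images in [0, 1] (c1, c2 are the
    # customary stabilizing constants).
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def fusion_loss(fused, src_a, src_b):
    # The sharper source (higher standard deviation, i.e. more local
    # contrast) gets more weight in the comparison against the fused image.
    wa, wb = src_a.std(), src_b.std()
    wa, wb = wa / (wa + wb), wb / (wa + wb)
    # Training maximizes weighted SSIM, so the loss is its negation.
    return -(wa * ssim(fused, src_a) + wb * ssim(fused, src_b))
```

When the fused image matches both sources exactly, SSIM is 1 for each term and the loss reaches its minimum of -1; training without ground truth works because the targets are the (real) source images themselves.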

    Self-supervised learning to detect key frames in videos

    © 2020 by the authors. Licensee MDPI, Basel, Switzerland. Detecting key frames in videos is a common problem in many applications such as video classification, action recognition, and video summarization. These tasks can be performed more efficiently using only a handful of key frames rather than the full video. Existing key frame detection approaches are mostly designed for supervised learning and require manual labelling of key frames in a large corpus of training data to train the models. Labelling requires human annotators from different backgrounds to annotate key frames in videos, which is not only expensive and time-consuming but also prone to subjective errors and inconsistencies between labelers. To overcome these problems, we propose an automatic self-supervised method for detecting key frames in a video. Our method comprises a two-stream ConvNet and a novel automatic annotation architecture able to reliably annotate key frames in a video for self-supervised learning of the ConvNet. The proposed ConvNet learns deep appearance and motion features to detect frames that are unique. The trained network is then able to detect key frames in test videos. Extensive experiments on the UCF101 human action and VSUMM video summarization datasets demonstrate the effectiveness of our proposed method.

    Solids velocity measurement using electric capacitance sensor assemblies

    This paper covers the application of the cross-correlation method for measuring the velocity of solid particles. Capacitive electrodes are used as primary sensors to measure the time required by the solid particles to cover a known distance between the electrodes. The capacitance variations of both electrode sensors are stored synchronously on a computer for offline estimation of the particle velocity. A single glass marble is used as the solid particle for the laboratory performance tests, owing to its good signal-to-noise ratio (SNR) and clear signal traces. Matlab R2018a is used as the programming tool to perform the cross-correlation algorithm. The distance between the sensors is adjusted optimally. The results are realistic and confirm the correctness of the system.
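The cross-correlation estimate described above can be sketched as follows (the paper uses Matlab; numpy is used here for a self-contained example). The lag that maximizes the cross-correlation of the two capacitance signals gives the transit time between the electrode pair, and velocity is electrode spacing divided by that time. The sampling rate, pulse shape, and electrode spacing below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def transit_velocity(upstream, downstream, fs, spacing_m):
    """Estimate particle velocity from two synchronously sampled signals."""
    # Full cross-correlation; the peak index gives the sample delay by which
    # the downstream signal lags the upstream one.
    xcorr = np.correlate(downstream, upstream, mode="full")
    lag_samples = xcorr.argmax() - (len(upstream) - 1)
    transit_time = lag_samples / fs            # seconds between the sensors
    return spacing_m / transit_time            # m/s

# Synthetic example: a Gaussian capacitance pulse as the marble passes
# sensor 1, and the same pulse 15 ms later at sensor 2.
fs = 10_000                                    # sampling rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)
pulse = np.exp(-(((t - 0.02) / 0.002) ** 2))   # sensor 1 signal
delayed = np.roll(pulse, 150)                  # sensor 2: 150 samples = 15 ms
v = transit_velocity(pulse, delayed, fs, spacing_m=0.30)
```

With a 0.30 m spacing and a 15 ms transit time, the estimate is 20 m/s; in practice the peak of the correlation is what makes the method robust to the noise on each individual capacitance trace.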

    Capacitive sensor and its calibration: A technique for the estimation of solid particles flow concentration

    The precise and accurate measurement of flow rate in the batch flow of solid particles is of primary importance in many process industries for improving system efficiency. Many techniques have been developed for the measurement of mass flow rate. Capacitive sensors have the advantages of being non-invasive, highly accurate, and low-cost for mass flow measurement, despite the fact that many factors adversely affect their performance, including non-uniform flow, multiphase flow, temperature, pressure, and moisture in the solid particles. This paper covers preliminary investigations of the offline estimation of mass flow concentration based on the calibration of capacitance electrodes to quantify the mass of the dielectric as a function of the capacitance variation between the electrodes.
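The calibration step described above amounts to fitting a curve through known (mass, capacitance) pairs and inverting it for later readings. A minimal sketch with a linear fit follows; the calibration points and the pF units are made-up illustrative values, not data from the paper.

```python
import numpy as np

# Known test masses loaded between the electrodes and the capacitance
# measured for each (illustrative values only).
masses_g = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
cap_pF = np.array([12.0, 12.8, 13.6, 14.4, 15.2])

# Least-squares calibration line: mass = a * capacitance + b.
a, b = np.polyfit(cap_pF, masses_g, deg=1)

def mass_from_capacitance(c_pF):
    """Convert a capacitance reading into a mass estimate via the fit."""
    return a * c_pF + b

estimate = mass_from_capacitance(13.2)   # a reading between two cal points
```

A higher-order `deg` in the fit would capture the non-linearities that moisture, temperature, and non-uniform flow introduce, at the cost of needing more calibration points.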